Medical image quality assessment (MIQA) is a vital prerequisite for various medical image analysis applications. Most existing MIQA algorithms are fully supervised and require a large amount of annotated data. However, annotating medical images is time-consuming and labor-intensive. In this paper, we propose an unsupervised anomaly-aware framework with test-time clustering for optical coherence tomography angiography (OCTA) image quality assessment, in a setting where only a set of high-quality samples is accessible during training. Specifically, a feature-embedding-based low-quality representation module is proposed to quantify the quality of OCTA images and to discriminate between outstanding-quality and non-outstanding-quality images. Within the non-outstanding-quality class, to further distinguish gradable images from ungradable ones, we perform dimension reduction and clustering on multi-scale image features extracted by the trained OCTA quality representation network. Extensive experiments on the publicly accessible dataset sOCTA-3*3-10k establish the superiority of the proposed framework.
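The test-time clustering step can be pictured with a minimal sketch: assuming multi-scale features have already been extracted by the trained quality representation network, dimension reduction followed by two-way clustering splits the non-outstanding-quality images into gradable and ungradable groups. Function and parameter names below are hypothetical, not the paper's implementation.

```python
# Minimal sketch of test-time clustering on pre-extracted multi-scale features.
# The component count, cluster count, and function name are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def split_gradable_ungradable(features: np.ndarray, n_components: int = 32, seed: int = 0):
    """Cluster non-outstanding-quality samples into two groups (gradable vs. ungradable).

    features: (N, D) multi-scale feature vectors of the non-outstanding-quality images.
    Returns an (N,) array of cluster labels in {0, 1}.
    """
    # Reduce the high-dimensional multi-scale features before clustering.
    reduced = PCA(n_components=min(n_components, features.shape[1]),
                  random_state=seed).fit_transform(features)
    # Two clusters: one is assumed to collect gradable images, the other ungradable ones.
    return KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(reduced)

# Example usage with random placeholder features.
if __name__ == "__main__":
    feats = np.random.randn(200, 256)
    print(np.bincount(split_gradable_ungradable(feats)))
```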
Convolutional neural networks have been widely applied to medical image segmentation and have achieved considerable performance. However, the performance can be significantly affected by the domain gap between the training data (source domain) and the testing data (target domain). To address this issue, we propose a data-manipulation-based domain generalization method, called Automated Augmentation for Domain Generalization (AADG). Our AADG framework can effectively sample data augmentation policies that generate novel domains and diversify the training set from an appropriate search space. Specifically, we introduce a novel proxy task that maximizes the diversity among multiple augmented novel domains, measured by the Sinkhorn distance in unit-sphere space, making automated augmentation tractable. Adversarial training and deep reinforcement learning are adopted to efficiently search for the objective. Quantitative and qualitative experiments are comprehensively performed on eleven publicly accessible fundus image datasets (four for retinal vessel segmentation, four for optic disc and cup (OD/OC) segmentation, and three for retinal lesion segmentation). Two OCTA datasets for retinal vasculature segmentation are further involved to validate cross-modality generalization. Our proposed AADG exhibits state-of-the-art generalization performance and outperforms existing approaches by considerable margins on the retinal vessel, OD/OC, and lesion segmentation tasks. The learned policies are empirically shown to be model-agnostic and transfer well to other models. The source code is available at https://github.com/crazorback/aadg.
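As a rough illustration of the diversity objective described above, the sketch below sums pairwise Sinkhorn distances between feature sets of augmented domains after projecting the features onto the unit sphere. The entropic regularization strength and iteration count are assumptions, and this is not the released AADG code.

```python
# Hedged sketch: pairwise Sinkhorn (entropic OT) distances as a diversity reward.
# Hyper-parameters (eps, n_iters) and the cosine cost are illustrative assumptions.
import torch

def sinkhorn_distance(x, y, eps=0.05, n_iters=100):
    """Entropic-regularized OT distance between feature sets (n, d) and (m, d)."""
    x = torch.nn.functional.normalize(x, dim=1)   # project onto the unit sphere
    y = torch.nn.functional.normalize(y, dim=1)
    cost = 1.0 - x @ y.t()                        # cosine cost matrix (n, m)
    n, m = cost.shape
    mu, nu = torch.full((n,), 1.0 / n), torch.full((m,), 1.0 / m)
    u, v = torch.zeros(n), torch.zeros(m)
    for _ in range(n_iters):                      # log-domain Sinkhorn iterations
        u = eps * (torch.log(mu) - torch.logsumexp((v[None, :] - cost) / eps, dim=1))
        v = eps * (torch.log(nu) - torch.logsumexp((u[:, None] - cost) / eps, dim=0))
    pi = torch.exp((u[:, None] + v[None, :] - cost) / eps)   # transport plan
    return torch.sum(pi * cost)

def diversity_reward(domain_features):
    """Sum of pairwise Sinkhorn distances over a list of per-domain feature tensors."""
    reward = 0.0
    for i in range(len(domain_features)):
        for j in range(i + 1, len(domain_features)):
            reward = reward + sinkhorn_distance(domain_features[i], domain_features[j])
    return reward

if __name__ == "__main__":
    feats = [torch.randn(32, 64) for _ in range(3)]   # features from 3 augmented domains
    print(float(diversity_reward(feats)))
```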
Light-weight time-of-flight (ToF) depth sensors are small, cheap, and low-energy, and have been massively deployed on mobile devices for purposes such as autofocus and obstacle detection. However, due to their specific measurement (a depth distribution over a zone rather than a depth value per pixel) and their extremely low resolution, they are insufficient for applications requiring high-fidelity depth such as 3D reconstruction. In this paper, we propose DELTAR, a novel method that empowers light-weight ToF sensors with the capability of measuring high-resolution and accurate depth by cooperating with a color image. As the core of DELTAR, a feature extractor customized for depth distributions and an attention-based neural architecture are proposed to fuse the information from the color and ToF domains efficiently. To evaluate our system in the real world, we design a data-collection device and propose a new approach to calibrate the RGB camera and the ToF sensor. Experiments show that our method produces more accurate depth than existing frameworks and achieves on-par performance with a commodity-level RGB-D sensor. Code and data are available at https://zju3dv.github.io/deltar/.
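A hypothetical sketch of the attention-based fusion idea is given below: low-resolution ToF zone features serve as keys and values, and high-resolution image features attend to them. Layer sizes and shapes are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of cross-modal attention fusion between color-image features and
# ToF depth-distribution features. Dimensions and the residual/norm layout are assumptions.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_tokens, tof_tokens):
        """rgb_tokens: (B, N_pix, C) image features; tof_tokens: (B, N_zones, C)
        features of the per-zone depth distributions reported by the ToF sensor."""
        # Each image location attends to the low-resolution ToF zones.
        fused, _ = self.attn(query=rgb_tokens, key=tof_tokens, value=tof_tokens)
        return self.norm(rgb_tokens + fused)   # residual connection

# Example: 64 ToF zones (8x8) fused into a 32x32 feature map flattened to tokens.
if __name__ == "__main__":
    m = CrossModalFusion()
    out = m(torch.randn(2, 32 * 32, 128), torch.randn(2, 64, 128))
    print(out.shape)  # torch.Size([2, 1024, 128])
```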
Sparse general matrix multiplication (SpGEMM) is a fundamental building block in many scientific applications. One critical task of SpGEMM is to compute or predict the structure of the output matrix (i.e., the number of non-zero elements per output row) for efficient memory allocation and load balancing, which affects the overall performance of SpGEMM. Existing work either computes the output structure precisely or adopts upper-bound or sampling-based methods to predict it. However, these methods either take too much execution time or are not accurate enough. In this paper, we propose a novel sampling-based method with better accuracy and lower cost than existing sampling-based methods. The proposed method first predicts the compression ratio of SpGEMM by leveraging the number of intermediate products (denoted FLOP) and the number of non-zero elements (denoted NNZ) of the same sampled result matrix. The predicted output structure is then obtained by dividing the FLOP of each output row by the predicted compression ratio. We also propose a reference design of the existing sampling-based method with optimized computational overhead to demonstrate the accuracy of the proposed method. We construct 625 test cases with various matrix dimensions and sparsity structures to evaluate the prediction accuracy. Experimental results show that the absolute relative errors of the proposed method and the reference design are 1.56% and 8.12% on average, and 25% and 156% in the worst case, respectively.
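The prediction procedure described above can be sketched directly: estimate the FLOP-to-NNZ compression ratio on a small sample of output rows, then divide every row's FLOP count by that ratio. The sampling fraction and the SciPy-based implementation below are illustrative assumptions.

```python
# Hedged sketch of sampling-based output-structure prediction for SpGEMM (C = A @ B).
# The row-sampling strategy and sample size are assumptions for illustration.
import numpy as np
import scipy.sparse as sp

def predict_output_structure(A: sp.csr_matrix, B: sp.csr_matrix, sample_frac=0.05, seed=0):
    rng = np.random.default_rng(seed)
    B_row_nnz = np.diff(B.indptr)                      # nnz of every row of B

    # FLOP of output row i = sum over nonzeros A[i, k] of nnz(B[k, :]).
    row_flop = np.array([B_row_nnz[A.indices[A.indptr[i]:A.indptr[i + 1]]].sum()
                         for i in range(A.shape[0])], dtype=np.int64)

    # Sample a few rows, compute their exact output nnz, and estimate the
    # compression ratio FLOP / NNZ of the sampled partial result.
    n_sample = max(1, int(sample_frac * A.shape[0]))
    sampled = rng.choice(A.shape[0], size=n_sample, replace=False)
    sampled_nnz = (A[sampled] @ B).getnnz()
    ratio = max(row_flop[sampled].sum(), 1) / max(sampled_nnz, 1)

    # Predicted nnz of each output row.
    return np.ceil(row_flop / ratio).astype(np.int64), ratio

if __name__ == "__main__":
    A = sp.random(2000, 2000, density=0.002, format="csr", random_state=1)
    B = sp.random(2000, 2000, density=0.002, format="csr", random_state=2)
    pred, ratio = predict_output_structure(A, B)
    true = np.diff((A @ B).tocsr().indptr)
    print("ratio", round(ratio, 3), "mean abs err", np.abs(pred - true).mean())
```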
Although cameras and LiDAR are widely used in most assisted and autonomous driving systems, only a few works have been proposed to associate the temporal synchronization and extrinsic calibration of camera and LiDAR for online sensor data fusion. Temporal and spatial calibration techniques face the challenges of lacking correlation and real-time performance. In this paper, we introduce a pose estimation model and environment-robust line extraction to improve the data-association relevance of fusion and the capability of instant online correction. Considering the correspondence of point-cloud matching between adjacent moments, a dynamic target is designed to seek the optimal policy. The search-optimization process is designed to provide accurate parameters with both computational accuracy and efficiency. To demonstrate the benefits of this approach, we evaluate it on the KITTI benchmark against ground-truth values. In online experiments, our approach improves accuracy by 38.5% compared with the soft synchronization method in temporal calibration. In spatial calibration, our approach automatically corrects disturbance errors within 0.4 s and achieves an accuracy of 0.3 degrees. This work can promote the research and application of sensor fusion.
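As a generic illustration of camera-LiDAR temporal calibration (not the pose-estimation and line-extraction pipeline proposed above), the sketch below grid-searches the time offset that best correlates the angular-speed profiles estimated independently from the two sensors.

```python
# Generic, illustrative temporal-calibration baseline: correlate per-frame angular
# speeds from the two sensors over candidate time offsets. All names are hypothetical.
import numpy as np

def best_time_offset(t_cam, w_cam, t_lidar, w_lidar, search_range=0.1, step=0.001):
    """t_*: timestamps (s); w_*: angular-speed magnitudes estimated per frame.
    Returns the offset (s) to add to the LiDAR timestamps."""
    best_offset, best_corr = 0.0, -np.inf
    for offset in np.arange(-search_range, search_range + step, step):
        # Resample the LiDAR signal onto the camera timestamps after shifting it.
        w_interp = np.interp(t_cam, t_lidar + offset, w_lidar)
        corr = np.corrcoef(w_cam, w_interp)[0, 1]
        if corr > best_corr:
            best_offset, best_corr = offset, corr
    return best_offset

if __name__ == "__main__":
    t = np.linspace(0.0, 10.0, 300)
    signal = np.abs(np.sin(2.0 * t))                       # shared angular-speed profile
    print(best_time_offset(t, signal, t - 0.025, signal))  # approximately 0.025
```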
Resolving morphological chemical phase transformations at the nanoscale is of vital importance to many scientific and industrial applications across various disciplines. The TXM-XANES imaging technique, which combines full-field transmission X-ray microscopy (TXM) and X-ray absorption near-edge structure (XANES), is an emerging tool that operates by acquiring a series of microscopy images with multi-energy X-rays and fitting them to obtain chemical maps. However, its capability is limited by the poor signal-to-noise ratio caused by system errors and the low-exposure illumination used for fast acquisition. In this work, by exploiting the intrinsic properties of TXM-XANES imaging data together with subspace modeling, we introduce a simple and robust denoising approach to improve image quality, which enables fast and high-sensitivity chemical imaging. Extensive experiments on both synthetic and real datasets demonstrate the superior performance of the proposed method.
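A simple way to picture subspace-based denoising of a multi-energy stack is a truncated SVD along the energy dimension, as sketched below; the chosen rank and the reshaping convention are assumptions, not the authors' exact algorithm.

```python
# Illustrative subspace (low-rank) denoising of a multi-energy image stack,
# assuming the spectral dimension is approximately low-rank.
import numpy as np

def subspace_denoise(stack: np.ndarray, rank: int = 4) -> np.ndarray:
    """stack: (E, H, W) images acquired at E energies. Returns the denoised stack."""
    E, H, W = stack.shape
    X = stack.reshape(E, H * W)                          # one spectrum-like row per energy
    mean = X.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    # Keep only the leading spectral components; the rest is treated as noise.
    X_low = U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank] + mean
    return X_low.reshape(E, H, W)

if __name__ == "__main__":
    clean = np.random.rand(8, 1, 1) * np.ones((8, 64, 64))          # toy low-rank stack
    noisy = clean + 0.1 * np.random.randn(8, 64, 64)
    den = subspace_denoise(noisy, rank=1)
    print(np.abs(den - clean).mean() < np.abs(noisy - clean).mean())  # True
```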
Studies in neuroscience have revealed the relationship between emotional patterns and brain functional regions, demonstrating that dynamic relationships between different brain regions are an essential factor affecting emotion recognition from electroencephalography (EEG). Moreover, in EEG emotion recognition, clearer boundaries can be observed between coarse-grained emotions than between fine-grained ones based on the same EEG data, which indicates the coexistence of large coarse-grained and small fine-grained emotion variations. Thus, a progressive classification process from coarse- to fine-grained categories may help EEG emotion recognition. Therefore, in this study, we propose a progressive graph convolutional network (PGCN) to capture this inherent characteristic of EEG emotion signals and progressively learn discriminative EEG features. To accommodate different EEG patterns, we construct a dual-graph module to characterize the intrinsic relationships between different EEG channels, incorporating both the dynamic functional connectivity identified by neuroscience research and the static spatial-proximity information of brain regions. Furthermore, motivated by the observed relationship between coarse- and fine-grained emotions, we adopt a dual-head module that enables the PGCN to progressively learn more discriminative EEG features, from coarse-grained (easy) to fine-grained (difficult) categories, following the hierarchical characteristic of emotions. To verify the performance of our model, extensive experiments are conducted on two public datasets: SEED-IV and the Multi-modal Physiological Emotion Database (MPED).
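The dual-graph idea can be sketched as a graph convolution that mixes a fixed spatial-proximity adjacency with a learnable adjacency intended to capture dynamic functional connectivity. The layer sizes and the way the two branches are merged below are illustrative assumptions, not the PGCN implementation.

```python
# Hypothetical sketch of a dual-graph convolution over EEG channel features:
# one branch uses a fixed spatial-proximity adjacency, the other a learned one.
import torch
import torch.nn as nn

class DualGraphConv(nn.Module):
    def __init__(self, n_channels: int, in_dim: int, out_dim: int, spatial_adj: torch.Tensor):
        super().__init__()
        self.register_buffer("A_static", spatial_adj)            # (C, C), fixed
        self.A_dynamic = nn.Parameter(torch.eye(n_channels))     # (C, C), learned
        self.w_static = nn.Linear(in_dim, out_dim)
        self.w_dynamic = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        """x: (B, C, F) per-channel EEG features."""
        h_static = self.w_static(torch.einsum("ij,bjf->bif", self.A_static, x))
        A_dyn = torch.softmax(self.A_dynamic, dim=-1)             # row-normalize
        h_dynamic = self.w_dynamic(torch.einsum("ij,bjf->bif", A_dyn, x))
        return torch.relu(h_static + h_dynamic)

if __name__ == "__main__":
    C, F = 62, 5                       # e.g. 62 electrodes, 5 band-power features
    adj = torch.eye(C)                 # placeholder spatial-proximity adjacency
    layer = DualGraphConv(C, F, 32, adj)
    print(layer(torch.randn(8, C, F)).shape)   # torch.Size([8, 62, 32])
```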
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, trading off model accuracy against constrained resources still needs further improvement. This work rethinks the essential unity between the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the instantiations share the same framework. Motivated by this observation, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
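A hedged PyTorch sketch of such a block is shown below: an inverted-residual structure whose hidden expansion is processed by a depthwise convolution for short-distance modelling, combined with multi-head self-attention for long-distance modelling. The expansion ratio, ordering, and normalization are assumptions and differ from the official EMO implementation.

```python
# Hedged sketch of an inverted-residual block mixing a depthwise convolution with
# self-attention, in the spirit of the iRMB. Not the official EMO code.
import torch
import torch.nn as nn

class IRMBSketch(nn.Module):
    def __init__(self, dim: int, expansion: int = 4, heads: int = 4):
        super().__init__()
        hidden = dim * expansion
        self.norm = nn.LayerNorm(dim)
        self.expand = nn.Conv2d(dim, hidden, 1)                            # expansion
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)   # depthwise conv
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)    # global mixing
        self.project = nn.Conv2d(hidden, dim, 1)                           # back to input width

    def forward(self, x):
        """x: (B, C, H, W)."""
        B, C, H, W = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))                   # (B, HW, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        x_global = attn_out.transpose(1, 2).reshape(B, C, H, W)
        y = self.project(torch.relu(self.dw(self.expand(x + x_global))))
        return x + y                                                       # residual connection

if __name__ == "__main__":
    block = IRMBSketch(64)
    print(block(torch.randn(2, 64, 14, 14)).shape)   # torch.Size([2, 64, 14, 14])
```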
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the resulting question-answer pairs as training data for a state-of-the-art BERT-based QA system. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed from subjects (or objects) and predicates while objects (or subjects) are taken as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves on-par performance with existing state-of-the-art QA systems, with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
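The triple-to-question step can be illustrated with the toy sketch below, which assumes the OpenIE triples are already extracted; the question templates are deliberate simplifications rather than the exact PIE-QG rules.

```python
# Toy illustration of turning OpenIE-style triples into synthetic QA pairs.
# Templates and wh-word choices are simplified assumptions.
from typing import List, Tuple, Dict

def triples_to_qa(triples: List[Tuple[str, str, str]]) -> List[Dict[str, str]]:
    qa_pairs = []
    for subj, pred, obj in triples:
        # Object becomes the answer, subject + predicate form the question ...
        qa_pairs.append({"question": f"What did {subj} {pred}?", "answer": obj})
        # ... and vice versa: subject becomes the answer.
        qa_pairs.append({"question": f"Who {pred} {obj}?", "answer": subj})
    return qa_pairs

if __name__ == "__main__":
    for qa in triples_to_qa([("Marie Curie", "discovered", "polonium")]):
        print(qa)
```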
Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such a dataset is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights significantly degrades when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer with limited medical data, we propose an auxiliary difficulty ranking task: the Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer is thus encouraged to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of the self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
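The two training signals can be sketched as follows: a BYOL-style consistency loss between the online prediction and the stop-gradient target representation, plus a cross-entropy loss for the auxiliary difficulty-ranking head. The tensors and shapes are placeholders, not the released BOLT code.

```python
# Hedged sketch of the two BOLT-style losses: representation consistency with a
# stop-gradient target, plus an auxiliary difficulty-ranking cross-entropy.
import torch
import torch.nn.functional as F

def bolt_losses(online_pred, target_repr, branch_logits, harder_branch):
    """online_pred, target_repr: (B, D) embeddings of the two perturbed views.
    branch_logits: (B, 2) output of the difficulty-ranking head.
    harder_branch: (B,) labels in {0, 1} marking which view was perturbed more."""
    # Consistency loss: negative cosine similarity, target branch receives no gradient.
    p = F.normalize(online_pred, dim=1)
    z = F.normalize(target_repr.detach(), dim=1)
    consistency = (2 - 2 * (p * z).sum(dim=1)).mean()
    # Auxiliary difficulty-ranking loss.
    ranking = F.cross_entropy(branch_logits, harder_branch)
    return consistency + ranking

if __name__ == "__main__":
    B, D = 16, 256
    loss = bolt_losses(torch.randn(B, D, requires_grad=True), torch.randn(B, D),
                       torch.randn(B, 2, requires_grad=True), torch.randint(0, 2, (B,)))
    loss.backward()
    print(float(loss))
```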